Decoupled Q-Chunking
Li, Qiyang, Park, Seohong, Levine, Sergey
Temporal-difference (TD) methods learn state and action values efficiently by bootstrapping from their own future value predictions, but such a self-bootstrapping mechanism is prone to bootstrapping bias, where errors in the value targets accumulate across steps and produce biased value estimates. Recent work has proposed chunked critics, which estimate the value of short action sequences ("chunks") rather than individual actions, speeding up value backup. However, extracting policies from chunked critics is challenging: the policy must output the entire action chunk open-loop, which can be sub-optimal in environments that require reactive policies, and which becomes increasingly difficult to model as the chunk length grows. Our key insight is to decouple the chunk length of the critic from that of the policy, allowing the policy to operate over shorter action chunks. We propose a novel algorithm that achieves this by optimizing the policy against a distilled critic for partial action chunks, constructed by optimistically backing up from the original chunked critic to approximate the maximum value achievable when a partial action chunk is extended to a complete one. This design retains the benefits of multi-step value propagation while sidestepping both the open-loop sub-optimality and the difficulty of learning action-chunking policies over long chunks. We evaluate our method on challenging, long-horizon offline goal-conditioned tasks and show that it reliably outperforms prior methods. Code: github.com/ColinQiyangLi/dqc.
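The optimistic backup described in the abstract can be sketched in a few lines. The following is a minimal toy illustration, not the paper's implementation: `chunked_critic` stands in for a learned H-step critic, and the partial-chunk target is approximated by sampling hypothetical completions and taking the best one. All names, the proposal distribution, and the sample count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

H, K = 4, 2       # critic chunk length H, shorter policy chunk length K
ACT_DIM = 1

def chunked_critic(state, chunk):
    # Toy stand-in for a learned H-step chunked critic Q_H(s, a_{1:H}).
    # Here the value is higher when actions stay close to the state value.
    return -float(np.sum((chunk - state) ** 2))

def optimistic_partial_value(state, partial, n_samples=64):
    # Optimistic backup: approximate
    #   Q_K(s, a_{1:K}) ~ max over completions a_{K+1:H} of Q_H(s, a_{1:H})
    # by sampling candidate completions from a proposal distribution
    # (here a fixed Gaussian; a real system would use a learned prior).
    best = -np.inf
    for _ in range(n_samples):
        completion = rng.normal(loc=state, scale=0.5, size=(H - K, ACT_DIM))
        full = np.concatenate([partial, completion], axis=0)
        best = max(best, chunked_critic(state, full))
    return best

# A distillation target for one (state, partial-chunk) pair; a partial-chunk
# critic would then be regressed onto many such targets.
state = 0.3
partial = np.zeros((K, ACT_DIM))
target = optimistic_partial_value(state, partial)
```

The policy then only needs to model K-step chunks while still benefiting from H-step value propagation through the distilled targets.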
CHyLL: Learning Continuous Neural Representations of Hybrid Systems
Teng, Sangli, Liu, Hang, Song, Jingyu, Sreenath, Koushil
Learning the flows of hybrid systems, which have both continuous and discrete-time dynamics, is challenging. Existing methods learn the dynamics of each discrete mode separately, and therefore suffer from the combination of mode switching and discontinuities in the flows. In this work, we propose CHyLL (Continuous Hybrid System Learning in Latent Space), which learns a continuous neural representation of a hybrid system without trajectory segmentation, event functions, or mode switching. The key insight of CHyLL is that the reset map glues the state space at the guard surface, reformulating the state space as a piecewise smooth quotient manifold on which the flow becomes spatially continuous. Building upon this insight and the embedding theorems of differential topology, CHyLL concurrently learns a singularity-free neural embedding in a higher-dimensional space and the continuous flow within it. We show that CHyLL predicts the flows of hybrid systems with superior accuracy and identifies their topological invariants. Finally, we apply CHyLL to the stochastic optimal control problem.
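The gluing idea can be illustrated on the classic bouncing-ball hybrid system. Below is a minimal sketch, not CHyLL itself: the raw velocity jumps at each elastic impact, but a hand-chosen embedding psi(h, v) = (h, v^2, h*v), which identifies (0, v) with (0, -v) on the guard surface h = 0, makes the trajectory spatially continuous. CHyLL learns such an embedding; this fixed map is purely an assumed illustration of why one can exist.

```python
import numpy as np

def simulate_bounce(h0=1.0, v0=0.0, g=9.81, dt=1e-3, t_end=1.5):
    # Bouncing ball: continuous dynamics h' = v, v' = -g, with an
    # elastic reset map v -> -v applied at the guard surface h = 0.
    hs, vs = [h0], [v0]
    h, v = h0, v0
    for _ in range(int(t_end / dt)):
        v -= g * dt
        h += v * dt
        if h < 0.0:            # guard crossed: apply the reset map
            h, v = -h, -v
        hs.append(h)
        vs.append(v)
    return np.array(hs), np.array(vs)

def glue_embedding(h, v):
    # Hypothetical gluing embedding: psi(h, v) = (h, v^2, h*v) sends
    # (0, v) and (0, -v) to the same point, so the reset becomes
    # invisible and the embedded flow is spatially continuous.
    return np.stack([h, v ** 2, h * v], axis=-1)

hs, vs = simulate_bounce()
# Largest step-to-step jump in the raw velocity (large at impacts)...
raw_jump = np.max(np.abs(np.diff(vs)))
# ...versus in the embedded trajectory (stays O(dt) throughout).
emb = glue_embedding(hs, vs)
emb_jump = np.max(np.linalg.norm(np.diff(emb, axis=0), axis=1))
```

The raw velocity jumps by roughly twice the impact speed at every bounce, while consecutive embedded states stay within an O(dt) distance of each other, which is the spatial continuity the quotient-manifold reformulation provides.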